

Palantir extends reach into British state as it gets access to sensitive FCA data

The Guardian

Palantir, co-founded by the billionaire Donald Trump donor Peter Thiel (pictured), has been appointed for a three-month trial period. Palantir is to be granted access to a trove of highly sensitive UK financial regulation data, in a deal that has prompted fresh concerns about the US AI company's deepening reach into the British state, the Guardian can reveal. The Financial Conduct Authority (FCA) has awarded Palantir a contract to investigate the watchdog's internal intelligence data in an effort to help it tackle financial crime, including fraud, money laundering and insider trading. The Miami-based company, co-founded by the billionaire Donald Trump donor Peter Thiel, has been appointed for a three-month trial, at more than £30,000 a week, to analyse the FCA's vast 'data lake', which could lead to a full procurement of an AI system.


Meta AI agent's instruction causes large sensitive data leak to employees

The Guardian

The data leak triggered a major internal security alert inside Meta. An AI agent instructed an engineer to take actions that exposed a large amount of Meta's sensitive data to some of its employees, in the latest example of AI causing upheaval at a large tech company. The leak, which Meta confirmed, happened when an employee asked for guidance on an engineering problem on an internal forum. An AI agent responded with a solution, which the employee implemented, exposing a large amount of sensitive user and company data to the company's engineers for two hours.


Essex police pause facial recognition camera use after study finds racial bias

The Guardian

Academics discover black people 'significantly more likely' to be identified when compared with other ethnic groups. Essex police have paused the use of live facial recognition (LFR) technology after a study found cameras were significantly more likely to target black people than people of other ethnicities. The move to suspend use of the AI-enabled systems was revealed by the Information Commissioner's Office (ICO), which regulates the use of the technology deployed so far by at least 13 police forces in London, south and north Wales, Leicestershire, Northamptonshire, Hampshire, Bedfordshire, Suffolk, Greater Manchester, West Yorkshire, Surrey and Sussex. The ICO said Essex police had paused LFR deployments "after identifying potential accuracy and bias risks" and warned other forces to have mitigations in place. LFR systems are either mounted at fixed locations or deployed in vans. In January, the home secretary, Shabana Mahmood, announced the number of LFR vans would increase five-fold, with 50 available to every police force in England and Wales. Essex commissioned University of Cambridge academics to conduct a study, which involved 188 actors walking past cameras being actively deployed from marked police vans in Chelmsford.


US startup advertises 'AI bully' role to test patience of leading chatbots

The Guardian

The job's only prerequisite is having an 'extensive personal history of being let down by technology'. Imagine a day at work where your main task is to pick a fight with a computer. No meetings, no emails - just you, a chair and a chatbot with the maddening tendency to think it has the cleverest mind in the room. The job title alone raises an eyebrow: "AI bully".


UK must learn lessons from AI race and retain its quantum computing talent, says minister

The Guardian

In quantum computers, the information is contained in qubits that can work through vast numbers of different outcomes, which is not possible with classical computers. The UK will not let quantum computing talent slip through its fingers and must learn lessons from US dominance of the AI race, the technology secretary has said, as the government announced a £1bn quantum funding pledge. Liz Kendall said the government hoped to retain homegrown quantum startups, engineers and researchers rather than lose them to competing countries, with the US stealing a march on its western rivals in AI. "I do look at what's happened on AI," said Kendall. "I do think we need to learn the lessons and make sure we give our brilliant scientists, spinouts and startups the ability to stay here and make it happen. And that requires a government that is bold and ambitious and confident in these technologies of the future."


Child abuse material 'systemic' on Elon Musk's X amid Grok scandal, Australian online safety regulator warned

The Guardian

Australia's eSafety commissioner wrote to X in January after its AI chatbot Grok was used to generate sexualised images of women and children online. The Australian online safety regulator warned Elon Musk's X amid the Grok sexualised image generation scandal that it found child abuse material was "particularly systemic" on X and more accessible than on "any other mainstream service", correspondence obtained by Guardian Australia reveals. The letter followed Grok's generation of sexualised images of women and children, which the prime minister, Anthony Albanese, described as "abhorrent". In the letter, obtained by Guardian Australia under freedom of information laws, eSafety's general manager of regulatory operations, Heidi Snell, pointed to Musk's promise when taking over the platform in 2022 that "removing child exploitation is priority #1", but said "the availability of CSEM [child sexual exploitation material] continues to appear particularly systemic on X".


Google scraps AI search feature that crowdsourced amateur medical advice

The Guardian

Google had said the 'What People Suggest' feature aimed to provide users with information from people with similar lived experiences. Google has dropped a new artificial intelligence search feature that gave users crowdsourced health advice from amateurs around the world. The company had said its launch of "What People Suggest", which provided tips from strangers, showed "the potential of AI to transform health outcomes across the globe". But Google has since quietly removed the feature, according to three people familiar with the decision.


Meta reportedly plans sweeping layoffs as AI costs increase

The Guardian

Meta is planning sweeping layoffs that could affect 20% or more of the company, three sources familiar with the matter told Reuters, as Meta seeks to offset costly artificial intelligence infrastructure bets and prepare for greater efficiency brought about by AI-assisted workers. No date has been set for the cuts and the magnitude has not been finalized, the people said. Top executives have recently signaled the plans to other senior leaders at Meta and told them to begin planning how to pare back, two of the people said. The sources spoke anonymously because they were not authorized to disclose the cuts. Meta did not immediately comment.


Anthropic-Pentagon battle shows how big tech has reversed course on AI and war

The Guardian

Less than a decade ago, Google employees scuttled any military use of its AI. The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war - and what lines it will not cross. Amid Silicon Valley's rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech's answer is looking very different than it did even less than a decade ago. Anthropic's feud with the Trump administration escalated three days ago as the AI firm sued the Department of Defense, claiming that the government's decision to blacklist it from government work violated its first amendment rights. The company and the Pentagon have been locked in a months-long standoff, with Anthropic attempting to prohibit its AI model from being used for domestic mass surveillance or fully autonomous lethal weapons.


Microsoft backs AI firm Anthropic in legal battle against Pentagon

The Guardian

Microsoft has thrown its weight behind Anthropic's legal challenge against the US Department of Defense. The tech company has filed an amicus brief in support of Anthropic's effort to overturn an aggressive Pentagon designation that effectively bars it from government work. In the brief, submitted to a federal court in San Francisco this week, Microsoft, which integrates Anthropic's AI tools into systems it provides to the US military, argued that a temporary restraining order was necessary to prevent serious disruption to suppliers whose products rely on the AI company's technology. Google, Amazon, Apple and OpenAI have also signed on to a brief in support of Anthropic. In a statement to the Guardian, Microsoft said: "The Department of War needs reliable access to the country's best technology."